Generative Deep Learning
Reliable Physiological Monitoring on the Wrist Using Generative Deep Learning to Address Poor Skin-Sensor Contact
Hung, Manh Pham, Ho, Matthew Yiwen, Zhang, Yiming, Spathis, Dimitris, Saeed, Aaqib, Ma, Dong
Photoplethysmography (PPG) is a widely adopted, non-invasive technique for monitoring cardiovascular health and physiological parameters in both consumer and clinical settings. While motion artifacts in dynamic environments have been extensively studied, suboptimal skin-sensor contact in sedentary conditions, a critical yet underexplored issue, can distort PPG waveform morphology, leading to the loss or misalignment of key features and compromising sensing accuracy. In this work, we propose CP-PPG, a novel framework that transforms Contact Pressure-distorted PPG signals into high-fidelity waveforms with ideal morphology. CP-PPG integrates a custom data collection protocol, a carefully designed signal processing pipeline, and a novel deep adversarial model trained with a custom PPG-aware loss function. We validated CP-PPG through comprehensive evaluations, including 1) morphology transformation performance on our self-collected dataset, 2) downstream physiological monitoring performance on public datasets, and 3) an in-the-wild study. Extensive experiments demonstrate substantial and consistent improvements in signal fidelity (Mean Absolute Error: 0.09, a 40% improvement over the original signal) as well as downstream performance across all evaluations in Heart Rate (HR), Heart Rate Variability (HRV), Respiration Rate (RR), and Blood Pressure (BP) estimation (on average, 21% improvement in HR; 41-46% in HRV; 6% in RR; and 4-5% in BP). These findings highlight the critical importance of addressing skin-sensor contact issues to enhance the reliability and effectiveness of PPG-based physiological monitoring. CP-PPG thus holds significant potential to improve the accuracy of wearable health technologies in clinical and consumer applications.
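The abstract names a custom PPG-aware loss but does not specify its form. A minimal numpy sketch of one plausible composite is shown below: waveform MAE plus a penalty on mismatched systolic peak counts. The function names `ppg_aware_loss` and `find_peaks_simple` and the weight `alpha` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def find_peaks_simple(x):
    # Indices of strict local maxima (stand-in for a real peak detector).
    return np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1

def ppg_aware_loss(pred, target, alpha=0.5):
    # Hypothetical composite: time-domain MAE plus a morphology term
    # penalising a mismatch in the number of detected peaks.
    mae = np.mean(np.abs(pred - target))
    peak_penalty = abs(len(find_peaks_simple(pred)) - len(find_peaks_simple(target)))
    return mae + alpha * peak_penalty

t = np.linspace(0, 2 * np.pi, 200)
target = np.sin(3 * t)        # idealised periodic waveform
pred = 0.8 * np.sin(3 * t)    # amplitude-distorted reconstruction
loss = ppg_aware_loss(pred, target)
```

In an adversarial setup such as CP-PPG's, a term like this would be added to the generator's objective alongside the discriminator loss, steering reconstructions toward physiologically plausible morphology rather than raw waveform agreement alone.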
Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play: Foster, David: 9781492041948: Amazon.com: Books
This book covers the key techniques that have dominated the generative modeling landscape in recent years and have allowed us to make impressive progress in creative tasks. As well as covering core generative modeling theory, we will be building full working examples of some of the key models from the literature and walking through the codebase for each, step by step. Throughout the book, you will find short, allegorical stories that help explain the mechanics of some of the models we will be building. I believe that one of the best ways to teach a new abstract theory is to first convert it into something that isn't quite so abstract, such as a story, before diving into the technical explanation. The individual steps of the theory are clearer within this context because they involve people, actions, and emotions, all of which are well understood, rather than abstract constructs such as neural networks, backpropagation, and loss functions.
DALL-E 2 shows the power of generative deep learning, but raises dispute over AI practices
This article is part of our coverage of the latest in AI research. Artificial intelligence research lab OpenAI made headlines again, this time with DALL-E 2, a machine learning model that can generate stunning images from text descriptions. DALL-E 2 builds on the success of its predecessor DALL-E and improves the quality and resolution of the output images thanks to advanced deep learning techniques. The announcement of DALL-E 2 was accompanied by a social media campaign by OpenAI's engineers and its CEO, Sam Altman, who shared wonderful photos created by the generative machine learning model on Twitter. DALL-E 2 shows how far the AI research community has come toward harnessing the power of deep learning and addressing some of its limits.
Machine Learning books with complete reviews: The best list for 2021!
Machine learning books are a great resource for building your knowledge, and in our experience they usually explain things better and more deeply than online courses or MOOCs. Once you are comfortable with Python and with data analysis using its main libraries, it is time to enter the fantastic world of machine learning: predictive models, applications, algorithms, and much more. There are a lot of books out there that try to teach you machine learning; here we have listed only some of the best ones. Before getting into more code-heavy ML books, we wanted to offer one oriented toward giving readers an understanding of the main topics of machine learning and artificial intelligence in an elegant, clear, and concise manner. Although there is code and maths in the book, the goal of The Hundred-Page Machine Learning Book by Andriy Burkov is to provide common ground for anyone with a STEM background to meet the wonderful world of data science. It covers an impressive variety of topics, though not in the depth other books might offer (bear in mind it is only a little over 100 pages), and it does so in a simple and clear manner that is useful for machine learning practitioners and newcomers alike.
Active Divergence with Generative Deep Learning -- A Survey and Taxonomy
Broad, Terence, Berns, Sebastian, Colton, Simon, Grierson, Mick
Generative deep learning systems offer powerful tools for artefact generation, given their ability to model distributions of data and generate high-fidelity results. In the context of computational creativity, however, a major shortcoming is that they are unable to explicitly diverge from the training data in creative ways and are limited to fitting the target data distribution. To address these limitations, there have been a growing number of approaches for optimising, hacking and rewriting these models in order to actively diverge from the training data. We present a taxonomy and comprehensive survey of the state of the art of active divergence techniques, highlighting the potential for computational creativity researchers to advance these methods and use deep generative models in truly creative systems.
Copyright in Generative Deep Learning
Franceschelli, Giorgio, Musolesi, Mirco
Machine-generated artworks are now part of the contemporary art scene: they are attracting significant investments and they are presented in exhibitions together with those created by human artists. These artworks are mainly based on generative deep learning techniques. Also given their success, several legal problems arise when working with these techniques. In this article we consider a set of key questions in the area of generative deep learning for the arts. Is it possible to use copyrighted works as training set for generative models? How do we legally store their copies in order to perform the training process? And then, who (if someone) will own the copyright on the generated data? We try to answer these questions considering the law in force in both US and EU and the future alternatives, trying to define a set of guidelines for artists and developers working on deep learning generated art.
#LondonAI Feb Meetup: Operational AI, Best Coding Practices, and Generative DL
Sometimes these notebooks find their way into production, but their code and structure are often far from ideal. In this session, we cover some best practices around creating and operationalising notebooks: structure, code style, refactoring in notebooks, unit testing, reproducibility, and more. Nikolay Manchev is a machine learning enthusiast and speaker. His area of expertise is machine learning and data science, and his research interests are in neural networks, with an emphasis on biological plausibility. Nikolay was a Senior Data Scientist and Developer Advocate at IBM [masked] and currently acts as the Principal Data Scientist for EMEA at Domino Data Lab.

Talk 3: Generative Deep Learning - The Key To Unlocking Artificial General Intelligence, by David Foster

Generative modelling is one of the hottest topics in AI. It's now possible to teach a machine to excel at human endeavours such as painting, writing, and composing music. In this talk, we will cover:
- A general introduction to generative modelling
- A walkthrough of one of the most utilised generative deep learning models, the Variational Autoencoder (VAE)
- Examples of state-of-the-art output from Generative Adversarial Networks (GANs) and Transformer-based architectures
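The VAE walkthrough announced above rests on two pieces that fit in a few lines: the reparameterization trick, which keeps the sampling step differentiable, and the closed-form KL term of the ELBO for a diagonal Gaussian posterior. A minimal numpy sketch follows; the function names are illustrative, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps: sampling is rewritten so gradients can
    # flow through mu and log_var while eps carries the randomness.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian q.
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

mu = np.zeros(4)
log_var = np.zeros(4)            # q(z|x) = N(0, I), i.e. the prior itself
z = reparameterize(mu, log_var)
kl = kl_divergence(mu, log_var)  # zero when q matches the prior
```

The full training objective would add a reconstruction term (e.g. per-pixel cross-entropy from a decoder network) to this KL penalty; the sketch isolates only the two steps that distinguish a VAE from a plain autoencoder.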
Personalized, Generative Narratives
For my capstone project at Metis, I decided to continue the work I was doing with natural language, film characterization, and IBM Watson personality insights. In my previous project, I had successfully visualized the Big Five personality profiles of film characters across a single genre; for this project, I decided to take the work several steps further. Before embarking on my journey to become a data scientist, my work was rooted primarily in the immersive experience design industry. I have been in that industry for nearly a decade, and in 2015 I founded my own immersive experience company, Screenshot Productions.
Generative Deep Learning
Generative modeling is one of the hottest topics in AI. It's now possible to teach a machine to excel at human endeavors such as painting, writing, and composing music. With this practical book, machine-learning engineers and data scientists will discover how to re-create some of the most impressive examples of generative deep learning models, such as variational autoencoders, generative adversarial networks (GANs), encoder-decoder models, and world models. Author David Foster demonstrates the inner workings of each technique, starting with the basics of deep learning before advancing to some of the most cutting-edge algorithms in the field. Through tips and tricks, you'll understand how to make your models learn more efficiently and become more creative.
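Of the model families the book covers, the GAN objective is the easiest to see in isolation. The numpy sketch below computes the standard discriminator loss and the non-saturating generator loss from binary cross-entropy; the hard-coded scores `d_real` and `d_fake` are illustrative stand-ins for a real discriminator's outputs, not results from any trained model.

```python
import numpy as np

def bce(p, label):
    # Binary cross-entropy for discriminator outputs p in (0, 1).
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -np.mean(label * np.log(p) + (1 - label) * np.log(1 - p))

# Discriminator objective: real samples labelled 1, generated samples 0.
d_real = np.array([0.9, 0.8])   # D's scores on real data (illustrative)
d_fake = np.array([0.1, 0.2])   # D's scores on generated data (illustrative)
d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)

# Non-saturating generator objective: push D's scores on fakes toward 1.
g_loss = bce(d_fake, 1.0)
```

With these scores the discriminator is doing well (low `d_loss`) while the generator is being fooled rarely (high `g_loss`), which is the typical state early in adversarial training; training alternates gradient steps on the two losses until neither side can easily improve.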